Learning via variably scaled kernels


Similar Articles

Interpolation with Variably Scaled Kernels

Within kernel-based interpolation and its many applications, it is a well-documented but unsolved problem to handle the scaling or the shape parameter. We consider native spaces whose kernels allow us to change the kernel scale of a d-variate interpolation problem locally, depending on the requirements of the application. The trick is to define a scale function c on the domain Ω ⊂ R^d to transfo...
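The abstract truncates before spelling out the construction, but the known VSK trick is to lift each point x to (x, c(x)) and interpolate with a fixed-scale kernel in one dimension more. The sketch below illustrates this under stated assumptions: the Gaussian kernel, the scale function `c`, and the 1-D test problem are all illustrative choices, not taken from the paper.

```python
import numpy as np

# Sketch of Variably Scaled Kernel (VSK) interpolation: a scale function
# c(x) lifts a d-variate problem into R^(d+1), where a kernel with a
# fixed shape parameter is used. The scale function c and the test
# problem below are hypothetical, for illustration only.

def gaussian_kernel(X, Y, eps=10.0):
    """Fixed-scale Gaussian kernel between two point sets."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-eps * d2)

def vsk_lift(X, c):
    """Map each point x to (x, c(x)): d-variate -> (d+1)-variate."""
    return np.hstack([X, c(X).reshape(-1, 1)])

def vsk_interpolate(X, f_vals, c, eps=10.0):
    """Solve the interpolation system on the lifted points."""
    Xl = vsk_lift(X, c)
    coeffs = np.linalg.solve(gaussian_kernel(Xl, Xl, eps), f_vals)
    def s(Xe):
        return gaussian_kernel(vsk_lift(Xe, c), Xl, eps) @ coeffs
    return s

# Illustrative 1-D example with a hypothetical scale function
c = lambda X: 0.5 * np.abs(X[:, 0])
X = np.linspace(-1.0, 1.0, 10).reshape(-1, 1)
f = np.sin(np.pi * X[:, 0])
s = vsk_interpolate(X, f, c)

Xe = np.linspace(-1.0, 1.0, 101).reshape(-1, 1)
err = np.max(np.abs(s(Xe) - np.sin(np.pi * Xe[:, 0])))
```

By construction the interpolant reproduces the data at the nodes; the practical point of VSKs is that choosing c well (e.g., adapting it to steep gradients) can improve accuracy and stability without tuning a global shape parameter.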


Interpolating functions with gradient discontinuities via Variably Scaled Kernels

In kernel-based methods, how to handle the scaling or the choice of the shape parameter is a well-documented but still open problem. The shape or scale parameter can be tuned by the user according to the application, and it plays a crucial role both for the accuracy of the method and for its stability. In [7], the Variably Scaled Kernels (VSKs) were introduced. The idea is to vary the scale in...


Convex Deep Learning via Normalized Kernels

Deep learning has been a long-standing pursuit in machine learning, which until recently was hampered by unreliable training methods before the discovery of improved heuristics for embedded layer training. A complementary research strategy is to develop alternative modeling architectures that admit efficient training methods while expanding the range of representable structures toward deep mode...


Learning Kernels via Margin-and-Radius Ratios

Despite the great success of SVM, it is usually difficult for users to select suitable kernels for SVM classifiers. Kernel learning has been developed to jointly learn both a kernel and an SVM classifier [1]. Most existing kernel learning approaches, e.g., [2, 3, 4], employ the margin-based formulation, equivalent to: $\min_{k,w,b,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_i \xi_i$, s.t. $y_i(\langle \phi(x_i; k), w\rangle + b) + \xi_i \ge 1$, $\xi_i \ge 0$, (1)...
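For a fixed kernel (feature map), formulation (1) is equivalent to minimizing the regularized hinge loss. The sketch below illustrates that margin-based primal under stated assumptions: a linear feature map stands in for φ(·; k), and the toy data, step size, and iteration count are illustrative, not from the paper.

```python
import numpy as np

# Minimal sketch of the margin-based primal in Eq. (1), rewritten in its
# equivalent hinge-loss form and minimized by subgradient descent:
#   min_{w,b}  (1/2)||w||^2 + C * sum_i max(0, 1 - y_i(<w, x_i> + b))
# The feature map, data, and optimizer settings are illustrative choices.

def svm_subgradient(X, y, C=1.0, lr=0.01, iters=2000):
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(iters):
        margins = y * (X @ w + b)
        active = margins < 1  # points violating the margin constraint
        gw = w - C * (y[active, None] * X[active]).sum(axis=0)
        gb = -C * y[active].sum()
        w -= lr * gw
        b -= lr * gb
    return w, b

# Linearly separable toy data
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = svm_subgradient(X, y)
preds = np.sign(X @ w + b)
```

Kernel learning as in the abstract goes a step further: k itself is a variable of the outer optimization, and the margin-and-radius-ratio idea replaces the plain margin objective with one that is invariant to kernel scaling.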


Learning Gaussian Process Kernels via Hierarchical Bayes

We present a novel method for learning with Gaussian process regression in a hierarchical Bayesian framework. In a first step, kernel matrices on a fixed set of input points are learned from data using a simple and efficient EM algorithm. This step is nonparametric, in that it does not require a parametric form of covariance function. In a second step, kernel functions are fitted to approximate...



Journal

Journal Title: Advances in Computational Mathematics

Year: 2021

ISSN: 1019-7168, 1572-9044

DOI: 10.1007/s10444-021-09875-6